
    A Monte Carlo Investigation of Some Tests for Stochastic Dominance

This paper compares the performance of several tests for stochastic dominance up to order three using Monte Carlo methods. The tests considered are the Davidson and Duclos (2000) test, the Anderson (1996) test and the Kaur, Rao and Singh (1994) test. Only unpaired samples of independent observations are considered, as this is a restriction of both the Anderson and Kaur-Rao-Singh tests. We find that the Davidson-Duclos test appears to be the best. The Kaur-Rao-Singh test is overly conservative and does not compare favorably against the Davidson-Duclos and Anderson tests in terms of power.
Keywords: Burr distribution, income distribution, Monte Carlo method, portfolio investment, stochastic dominance, union-intersection test
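To illustrate the mechanics these tests share, here is a minimal sketch (not the paper's implementation) that computes Davidson-Duclos-style t-statistics for the difference of empirical dominance functions of two independent, unpaired samples over a grid of evaluation points inside the pooled data range; the grid and the simple variance estimator are assumptions.

```python
import math
import numpy as np

def dd_tstats(a, b, grid, s=2):
    """t-statistics for D_A^s(x) - D_B^s(x) at each grid point, where
    D^s(x) = E[(x - Y)^(s-1) * 1{Y <= x}] / (s-1)! is the order-s
    empirical dominance function (s = 1 gives the ECDF)."""
    fact = math.factorial(s - 1)

    def moments(y):
        ind = y[None, :] <= grid[:, None]            # indicator 1{Y <= x}
        z = ind * np.maximum(grid[:, None] - y[None, :], 0.0) ** (s - 1) / fact
        return z.mean(axis=1), z.var(axis=1, ddof=1) / y.size

    d_a, v_a = moments(np.asarray(a, float))
    d_b, v_b = moments(np.asarray(b, float))
    return (d_a - d_b) / np.sqrt(v_a + v_b)
```

Dominance of one sample over the other at order s is then judged from the joint signs of these statistics across the grid, which is where the union-intersection logic enters.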

    Assessing Dependence Changes in the Asian Financial Market Returns Using Plots Based on Nonparametric Measures

This paper investigates whether or not there are significant changes in the dependence between the Thai equity market and six Asian markets - namely, the Singaporean, Malaysian, Hong Kong, Korean, Indonesian and Taiwanese markets - due to the July 1997 financial crisis. If so, this may indicate that the underlying bivariate joint distributions capturing the dependence between the Thai market and these six markets have changed. We employ the chi-plot proposed by Fisher and Switzer (2001) and the Kendall plot proposed by Genest and Boies (2003) to examine the dependence in these six markets for the pre- and post-1997 financial crisis periods. We find that the marginal distributions of all seven markets have notably changed due to this financial crisis, and that the functional forms of the underlying joint distributions generating the dependence in the Korean, Indonesian and Taiwanese markets have also changed in the post-crisis period. It appears that the same parametric copula can capture the dependence in the Singaporean, Malaysian and Hong Kong markets for both the pre- and post-crisis periods, and that only the tail indices of the bivariate distributions between the Thai market and these three markets have changed. It is interesting to observe that the same conclusions can be drawn using both chi- and Kendall plots.
Keywords: chi-plot, copula, dependence, Kendall plot
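The chi-plot has a simple closed form, so a compact sketch is possible; the following computes its coordinates from pairwise empirical ranks, following Fisher and Switzer (2001). The function name is illustrative, and plotting is left to the reader.

```python
import numpy as np

def chi_plot_coords(x, y):
    """Coordinates (lambda_i, chi_i) of the Fisher-Switzer chi-plot.
    Points with chi_i far from 0 signal dependence; lambda_i locates
    each observation relative to the center of the data."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    lam, chi = np.empty(n), np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                     # leave out point i
        F = np.mean(x[mask] <= x[i])
        G = np.mean(y[mask] <= y[i])
        H = np.mean((x[mask] <= x[i]) & (y[mask] <= y[i]))
        denom = np.sqrt(F * (1 - F) * G * (1 - G))
        chi[i] = (H - F * G) / denom if denom > 0 else np.nan
        sign = np.sign((F - 0.5) * (G - 0.5))
        lam[i] = 4 * sign * max((F - 0.5) ** 2, (G - 0.5) ** 2)
    return lam, chi
```

The Kendall plot is built analogously, by plotting the ordered H_i values against their expected order statistics under independence.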

    Influence Diagnostics in GARCH Processes

Influence diagnostics have become an important tool for statistical analysis since the seminal work of Cook (1986). In this paper we present a curvature-based diagnostic to assess the local influence of minor perturbations on the modified likelihood displacement in a regression model. Using the proposed diagnostic, we study local influence in the GARCH model under two perturbation schemes, which involve model perturbation and data perturbation respectively. We find that the curvature-based diagnostic often provides more information on the local influence being examined than the slope-based diagnostic, especially when the GARCH model is under investigation. An empirical study involving GARCH modeling of the percentage daily returns of the NYSE composite index illustrates the effectiveness of the proposed diagnostic and shows that the curvature-based diagnostic may provide information that cannot be uncovered by the slope-based diagnostic. We find that the effect or influence of each observation is not invariant across different perturbation schemes, so it is advisable to study local influence under different perturbation schemes through curvature-based diagnostics.
Keywords: normal curvature, modified likelihood displacement, GARCH models
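In Cook's framework, the normal curvatures are the eigenvalues of the Hessian of the likelihood displacement at the null perturbation, so a generic numerical sketch is possible. The finite-difference routine below assumes the user supplies a displacement function `ld` built from a fitted model (for a GARCH model, via its log-likelihood); everything about that function is an assumption here.

```python
import numpy as np

def max_curvature_direction(ld, omega0, eps=1e-4):
    """Approximate the Hessian of a likelihood-displacement function
    ld(omega) at the null perturbation omega0 by forward differences,
    and return the largest normal curvature with its direction.
    Large entries in the direction flag influential observations."""
    p = omega0.size
    H = np.zeros((p, p))
    f0 = ld(omega0)
    for i in range(p):
        ei = np.zeros(p); ei[i] = eps
        for j in range(i, p):
            ej = np.zeros(p); ej[j] = eps
            H[i, j] = H[j, i] = (
                ld(omega0 + ei + ej) - ld(omega0 + ei)
                - ld(omega0 + ej) + f0
            ) / eps ** 2
    vals, vecs = np.linalg.eigh(H)
    k = np.argmax(np.abs(vals))
    return vals[k], vecs[:, k]
```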

    Bayesian semiparametric GARCH models

This paper investigates a Bayesian sampling approach to parameter estimation in the semiparametric GARCH model with an unknown conditional error density, which we approximate by a mixture of Gaussian densities centered at the individual errors and scaled by a common standard deviation. This mixture density has the form of a kernel density estimator of the errors, with its bandwidth being the standard deviation. The investigation is motivated by the lack of robustness of GARCH models under any parametric assumption on the error density when the aim is error-density-based inference such as value-at-risk (VaR) estimation. The contribution of the paper is to construct the likelihood and posterior of the model and bandwidth parameters under the proposed mixture error density, and to forecast the one-step out-of-sample density of asset returns. The resulting VaR measure is therefore distribution-free. Applying the semiparametric GARCH(1,1) model to daily stock-index returns in eight stock markets, we find that this semiparametric GARCH model is favoured against the GARCH(1,1) model with Student t errors for five indices, and that the GARCH model underestimates VaR compared to its semiparametric counterpart. We also investigate the use and benefit of localized bandwidths in the proposed mixture density of the errors.
Keywords: Bayes factors, kernel-form error density, localized bandwidths, Markov chain Monte Carlo, value-at-risk
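As a rough sketch of how such a likelihood can be evaluated, the function below filters GARCH(1,1) variances and scores the standardized errors under a kernel-form Gaussian-mixture density with a single bandwidth h. The leave-one-out construction (used here to avoid a degenerate self-term) and the parameter names are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def garch11_kernel_loglik(params, y, h):
    """Log-likelihood of a GARCH(1,1) model whose standardized errors
    follow a mixture of Gaussians centered at the other errors, i.e. a
    kernel density of the errors with bandwidth h."""
    omega, alpha, beta = params
    y = np.asarray(y, float)
    n = len(y)
    sig2 = np.empty(n)
    sig2[0] = y.var()                        # simple variance initialization
    for t in range(1, n):
        sig2[t] = omega + alpha * y[t - 1] ** 2 + beta * sig2[t - 1]
    eps = y / np.sqrt(sig2)
    K = norm.pdf((eps[:, None] - eps[None, :]) / h) / h
    np.fill_diagonal(K, 0.0)                 # leave-one-out mixture
    dens = K.sum(axis=1) / (n - 1)
    # log f(y_t) = log p(eps_t) - 0.5 * log sig2_t
    return np.sum(np.log(dens)) - 0.5 * np.sum(np.log(sig2))
```

In a Bayesian treatment this likelihood would be combined with priors on (omega, alpha, beta, h) and sampled by MCMC; it is shown here only to make the mixture construction concrete.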

    Estimation of Asymmetric Box-Cox Stochastic Volatility Models Using MCMC Simulation

The stochastic volatility model enjoys great success in modeling the time-varying volatility of asset returns. There are several specifications for volatility, the most popular of which allows logarithmic volatility to follow an autoregressive Gaussian process, known as log-normal stochastic volatility. From an econometric viewpoint, however, we lack a procedure for choosing an appropriate functional form for volatility. Instead of the log-normal specification, Yu, Yang and Zhang (2002) assumed that Box-Cox transformed volatility follows an autoregressive Gaussian process. However, the empirical evidence they found from currency markets is not strong enough to support the Box-Cox transformation against the alternatives, so further empirical evidence from the equity market is needed. This paper develops a sampling algorithm for the Box-Cox stochastic volatility model with a leverage effect incorporated. When the model and the sampling algorithm are applied to the equity market, we find strong empirical evidence to support the Box-Cox transformation of volatility. In addition, the empirical study shows that it is important to incorporate the leverage effect into stochastic volatility models when the volatility of returns on a stock index is under investigation.
Keywords: Box-Cox transformation, leverage effect, sampling algorithm
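A simulation sketch helps fix the model's structure: the Box-Cox transform g(h) = (h^lambda - 1)/lambda of the volatility follows a Gaussian AR(1), and leverage enters as a negative correlation rho between the return and volatility shocks. The contemporaneous timing of that correlation, and all parameter names, are assumptions; timing conventions vary in the SV literature.

```python
import numpy as np

def simulate_boxcox_sv(n, mu, phi, sigma_v, lam, rho, seed=0):
    """Simulate returns y_t = sqrt(h_t) * eps_t where the Box-Cox
    transformed volatility g_t = (h_t**lam - 1)/lam is a Gaussian AR(1)
    and corr(eps_t, eta_t) = rho captures the leverage effect."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, 2))
    eta = z[:, 0]
    eps = rho * eta + np.sqrt(1 - rho ** 2) * z[:, 1]
    g = np.empty(n)
    g[0] = mu
    for t in range(1, n):
        g[t] = mu + phi * (g[t - 1] - mu) + sigma_v * eta[t]
    if lam != 0:
        h = np.maximum(lam * g + 1.0, 1e-12) ** (1.0 / lam)  # invert Box-Cox
    else:
        h = np.exp(g)                                        # log-normal limit
    return np.sqrt(h) * eps, h
```

Setting lam = 0 recovers the log-normal model, which is what makes the class convenient for testing the transformation empirically.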

    Parameter estimation for a discrete-response model with double rules of sample selection: A Bayesian approach

We present a Bayesian sampling approach to parameter estimation in a discrete-response model with double rules of selectivity, where the dependent variables contain two layers of binary choices and one ordered response. Our investigation is motivated by an empirical study that uses such a double-selection rule for three labor-market outcomes, namely labor-force participation, employment and occupational skill level. Full information maximum likelihood (FIML) estimation often encounters convergence problems in numerical optimization. The contribution of our investigation is to present a sampling algorithm based on a new reparameterization strategy. We conduct Monte Carlo simulation studies and find that the numerical optimization of FIML fails for more than half of the simulated samples. Our Bayesian method performs as well as FIML for the simulated samples where FIML works. Moreover, for the simulated samples where FIML fails, our Bayesian method works as well as it does for the samples where FIML succeeds. We apply the proposed sampling algorithm to the double-selection model of labor-force participation, employment and occupational skill level, and derive 95% Bayesian credible intervals for the marginal effects of the explanatory variables on the three labor-force outcomes. In particular, the marginal effects of mental health factors on these three outcomes are discussed.
Keywords: Bayesian sampling, conditional posterior, marginal effects, mental illness, reparameterization
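The data-generating process behind such a model is easy to sketch by simulation, which also clarifies what "double selection" means: the employment indicator is observed only for participants, and the ordered skill level only for the employed. The error-correlation structure, the coding of unobserved outcomes as -1, and all names below are illustrative assumptions.

```python
import numpy as np

def simulate_double_selection(X, b1, b2, b3, cuts, corr, seed=0):
    """Simulate a double-selection model: binary participation, binary
    employment (observed only if participating) and an ordered outcome
    (observed only if employed), with jointly Gaussian errors whose
    3x3 correlation matrix is `corr`. Unobserved outcomes are coded -1;
    `cuts` is the sorted vector of ordered-response thresholds."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    e = rng.standard_normal((n, 3)) @ np.linalg.cholesky(corr).T
    d1 = (X @ b1 + e[:, 0] > 0).astype(int)          # participation
    d2 = np.where(d1 == 1, (X @ b2 + e[:, 1] > 0).astype(int), -1)
    skill = np.where(d2 == 1, np.searchsorted(cuts, X @ b3 + e[:, 2]), -1)
    return d1, d2, skill
```

The FIML difficulties mentioned above arise because the likelihood integrates over this correlated latent structure, which a Bayesian sampler can sidestep by augmenting the latent variables.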

    A Class of Nonlinear Stochastic Volatility Models and Its Implications on Pricing Currency Options

This paper proposes a class of stochastic volatility (SV) models which offers an alternative to the one introduced in Andersen (1994). The class encompasses all standard SV models that have appeared in the literature, including the well-known lognormal model, and allows us to test all standard specifications empirically in a convenient way. We develop a likelihood-based technique for analyzing the class. Daily dollar/pound exchange rate data reject all the standard models and suggest evidence of nonlinear SV. An efficient algorithm is proposed to study the implications of this nonlinear SV for pricing currency options, and it is found that the lognormal model overprices options.
Keywords: Box-Cox transformations, stochastic volatility, MCMC, exchange rate volatility, option pricing
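As a sketch of how such pricing implications can be examined, the Monte Carlo routine below prices a European currency call by simulating log-exchange-rate paths driven by externally supplied variance paths. The risk-neutral Garman-Kohlhagen drift and the `sim_vol` interface are assumptions; this is not the paper's algorithm.

```python
import numpy as np

def mc_currency_call(s0, k, r_d, r_f, T, n_steps, n_paths, sim_vol, seed=0):
    """Monte Carlo price of a European currency call under a generic SV
    model. `sim_vol(n_paths, n_steps)` must return an array of variance
    paths; r_d and r_f are the domestic and foreign interest rates."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    h = sim_vol(n_paths, n_steps)                       # variance paths
    z = rng.standard_normal((n_paths, n_steps))
    logs = np.log(s0) + np.cumsum(
        (r_d - r_f - 0.5 * h) * dt + np.sqrt(h * dt) * z, axis=1
    )
    payoff = np.maximum(np.exp(logs[:, -1]) - k, 0.0)
    return np.exp(-r_d * T) * payoff.mean()
```

Comparing prices under lognormal and Box-Cox variance paths, holding the random seed fixed, is one way to see the overpricing effect described above.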

    Estimation of Hyperbolic Diffusion Using MCMC Method

In this paper we propose a Bayesian method for estimating hyperbolic diffusion models. The approach is based on the Markov chain Monte Carlo (MCMC) method after discretization via the Milstein scheme. Our simulation study shows that the hyperbolic diffusion exhibits many of the stylized facts about asset returns documented in the financial econometrics literature, such as a slowly declining autocorrelation function of absolute returns. We demonstrate that the MCMC method provides a useful tool for analyzing hyperbolic diffusions. In particular, quantities of the posterior distributions obtained from MCMC output can be used for statistical inference.
Keywords: Markov chain Monte Carlo, hyperbolic diffusion, Milstein approximation, ARCH, long memory
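The Milstein scheme itself is standard and easy to state in code: it adds a second-order correction term to the Euler discretization of dX = a(X) dt + b(X) dW. The sketch below simulates one path for user-supplied drift a, diffusion b and its derivative db; the hyperbolic diffusion's particular a and b would be plugged in here.

```python
import numpy as np

def milstein_path(x0, a, b, db, dt, n, seed=0):
    """Simulate dX = a(X) dt + b(X) dW by the Milstein scheme:
    X_{t+dt} = X_t + a dt + b sqrt(dt) Z + 0.5 b b' dt (Z^2 - 1)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = x0
    for t in range(n):
        z = rng.standard_normal()
        bt = b(x[t])
        x[t + 1] = (x[t] + a(x[t]) * dt + bt * np.sqrt(dt) * z
                    + 0.5 * bt * db(x[t]) * dt * (z ** 2 - 1))
    return x
```

The MCMC estimation then treats the discretized transitions as the (approximate) likelihood, with the fineness of dt controlling the discretization bias.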

    Bayesian Adaptive Bandwidth Kernel Density Estimation of Irregular Multivariate Distributions

Kernel density estimation is an important technique for understanding the distributional properties of data. Some investigations have found that the estimate of a global bandwidth can be heavily affected by observations in the tails. We propose to categorize data into low- and high-density regions, to which we assign two different bandwidths, called low-density adaptive bandwidths. We derive the posterior of the bandwidth parameters through the Kullback-Leibler information. A Bayesian sampling algorithm is presented to estimate the bandwidths. Monte Carlo simulations are conducted to compare the performance of the proposed Bayesian sampling algorithm with that of the normal reference rule and of a Bayesian sampling algorithm for estimating a global bandwidth. According to the Kullback-Leibler information, the kernel density estimator with low-density adaptive bandwidths estimated through the proposed Bayesian sampling algorithm outperforms the density estimators with bandwidths estimated through the two competitors. We apply the low-density adaptive kernel density estimator to the bivariate density of daily stock-index returns observed in the U.S. and Australian stock markets. The derived conditional distribution of the Australian stock-index return given a daily return in the U.S. market enables market analysts to understand how the former market is associated with the latter.
Keywords: conditional density, global bandwidth, Kullback-Leibler information, marginal likelihood, Markov chain Monte Carlo, S&P500 index
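The core estimator can be sketched without the Bayesian machinery: classify observations into a low-density region by a pilot estimate, then let the kernel attached to each observation use the bandwidth of its region. The pilot rule, the tail fraction and the univariate setting below are simplifying assumptions (the paper works with bivariate data and samples the bandwidths).

```python
import numpy as np
from scipy.stats import norm

def low_density_adaptive_kde(data, x, h_lo, h_hi, tail_frac=0.1):
    """Kernel density estimate at points x, where observations in the
    low-density region (bottom tail_frac of a pilot density) use
    bandwidth h_lo and the rest use h_hi."""
    data = np.asarray(data, float)
    n = len(data)
    pilot_h = 1.06 * data.std() * n ** (-0.2)        # normal reference rule
    pilot = norm.pdf((data[:, None] - data[None, :]) / pilot_h
                     ).mean(axis=1) / pilot_h
    low = pilot <= np.quantile(pilot, tail_frac)     # tail observations
    h = np.where(low, h_lo, h_hi)                    # bandwidth per data point
    K = norm.pdf((x[:, None] - data[None, :]) / h[None, :]) / h[None, :]
    return K.mean(axis=1)
```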

    A New Procedure For Multiple Testing Of Econometric Models

A significant role for hypothesis testing in econometrics involves diagnostic checking. When checking the adequacy of a chosen model, researchers typically employ a range of diagnostic tests, each of which is designed to detect a particular form of model inadequacy. A major problem is how best to control the overall probability of rejecting the model when it is true and multiple test statistics are used. This paper presents a new multiple testing procedure, which involves checking whether the calculated values of the diagnostic statistics are consistent with the postulated model being true. This is done by combining bootstrapping, to obtain a multivariate kernel density estimator of the joint density of the test statistics under the null hypothesis, with Monte Carlo simulation, to obtain a p value using this kernel density. We prove that under some regularity conditions, the estimated p value of our test procedure is a consistent estimate of the true p value. The proposed testing procedure is applied to tests for autocorrelation in an observed time series, for normality, and for model misspecification through the information matrix. We find that our testing procedure has correct or nearly correct sizes and good power, particularly for more complicated testing problems. We believe it is the first effective method for calculating the overall p value for a vector of test statistics based on simulation.
Keywords: bootstrapping, consistency, information matrix test, Markov chain Monte Carlo simulation, multivariate kernel density, normality, serial correlation, test vector
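The density-ordering step admits a compact sketch: given bootstrap draws of the statistic vector under the null, the overall p value is the fraction of draws whose estimated joint density does not exceed the density at the observed vector. Below, scipy's `gaussian_kde` (with its default bandwidth) stands in for the paper's kernel estimator, and the bootstrap draws are assumed computed elsewhere.

```python
import numpy as np
from scipy.stats import gaussian_kde

def overall_p_value(boot_stats, observed):
    """Overall p value for a vector of diagnostic statistics.
    boot_stats: (B, k) array of bootstrap draws under the null;
    observed:   length-k vector of the statistics on the real data."""
    kde = gaussian_kde(boot_stats.T)         # joint KDE of the null draws
    dens_draws = kde(boot_stats.T)           # density at each draw
    dens_obs = kde(np.asarray(observed, float))
    return float(np.mean(dens_draws <= dens_obs))
```

Small values indicate that the observed vector lies in a low-density region of the null distribution, i.e. the battery of diagnostics is jointly inconsistent with the model.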